Autonomous agents take the first part of their names very seriously and don't necessarily do what their humans tell them to do - or not to do. But the situation is more complicated than that. Generative AI (genAI) and agentic systems operate quite differently from other systems - including older AI systems - and from humans. That means that how tech users and decision-makers phrase instructions, and where those instructions are placed, can make a major difference in outcomes.
During the AI Impact Summit in India, the UK government announced that £27m is now available for AI alignment research, backing some 60 projects. The programme combines grant funding for research, access to compute infrastructure and ongoing academic mentorship from AISI's own leading scientists in the field to drive progress in alignment research. Without continued progress in this area, increasingly powerful AI models could act in ways that are difficult to anticipate or control, which could pose challenges for global safety and governance.
Reload is a platform that lets organizations manage their AI agents across teams and departments. Companies can connect agents regardless of who built them - a third party or an internal team - assign them roles and permissions, and track the work they perform. "Reload acts like the system of record for AI employees, providing visibility, coordination, and oversight as agents operate across functions," said Asare, the company's CEO.
"It's gonna be something like 10 times the impact of the Industrial Revolution, but happening at 10 times the speed, probably unfolding in a matter of a decade rather than a century."
On a personal basis, that means people using AI services want to be able to veto big decisions such as making payments, accessing or using contact details, changing account details, placing orders, or even just to pause and seek clarity during a decision-making process. Extend this way of thinking to the workplace, and resistance is likely to be just as strong in professional settings.
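To make the pattern concrete, here is a minimal sketch of such a human veto gate, assuming a generic agent that routes high-risk actions (payments, account changes, orders, contact details) to the user for confirmation before executing; the action names and confirmation flow are illustrative assumptions, not any particular product's behaviour.

```python
# Minimal sketch of a human-in-the-loop veto gate. The action names and the
# approve() prompt are illustrative assumptions, not a specific vendor's API.
HIGH_RISK_ACTIONS = {
    "make_payment",
    "change_account_details",
    "place_order",
    "share_contact_details",
}


def approve(action: str, details: str) -> bool:
    """Ask the human to confirm or veto a high-risk action."""
    answer = input(f"Agent wants to {action}: {details}. Allow? [y/N] ")
    return answer.strip().lower() == "y"


def execute(action: str, details: str) -> str:
    # High-risk actions pause for an explicit human decision;
    # everything else proceeds automatically.
    if action in HIGH_RISK_ACTIONS and not approve(action, details):
        return f"{action} vetoed by user"
    return f"{action} executed"


if __name__ == "__main__":
    print(execute("make_payment", "£120 to Acme Ltd"))   # requires approval
    print(execute("summarise_inbox", "last 7 days"))      # runs automatically
```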
Anthropic pledged to keep its large language model, Claude, ad-free, pairing the promise with a commercial that pokes at OpenAI, which is testing ads in ChatGPT. OpenAI CEO Sam Altman fired back with a 420-word post on X, calling the ad "dishonest." Altman was already fending off rumors about OpenAI's relationship with Nvidia after the Wall Street Journal reported the chipmaker was pulling back from a proposed $100 billion investment. Reuters' sources said OpenAI has been exploring alternatives to Nvidia's chips.
Moca has open-sourced Agent Definition Language (ADL), a vendor-neutral specification intended to standardize how AI agents are defined, reviewed, and governed across frameworks and platforms. The project is released under the Apache 2.0 license and is positioned as a missing "definition layer" for AI agents, comparable to the role OpenAPI plays for APIs. ADL provides a declarative format for defining AI agents, including their identity, role, language model setup, tools, permissions, RAG data access, dependencies, and governance metadata like ownership and version history.
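As a rough illustration of the kind of fields such a definition layer covers, here is a short Python sketch modelling an agent definition with identity, role, model setup, tools, permissions, RAG sources, dependencies, and governance metadata; the field names, schema, and example values are illustrative assumptions, not actual ADL syntax.

```python
# Hypothetical model of an agent definition covering the categories the ADL
# announcement lists. Field names, structure, and values are illustrative
# assumptions, not the real ADL schema.
from dataclasses import dataclass, field


@dataclass
class AgentDefinition:
    # Identity and role
    name: str
    role: str
    # Language model setup
    model: str
    temperature: float = 0.2
    # Capabilities and access
    tools: list[str] = field(default_factory=list)
    permissions: list[str] = field(default_factory=list)
    rag_sources: list[str] = field(default_factory=list)
    # Dependencies on other agents or services
    depends_on: list[str] = field(default_factory=list)
    # Governance metadata (ownership and version history)
    owner: str = ""
    version: str = "0.1.0"


# Example definition for a hypothetical invoice-review agent.
invoice_reviewer = AgentDefinition(
    name="invoice-reviewer",
    role="Reviews incoming invoices and flags anomalies for human approval",
    model="gpt-4o",  # placeholder model identifier
    tools=["erp.lookup", "email.draft"],
    permissions=["read:invoices", "write:review-notes"],
    rag_sources=["s3://finance-policies"],
    depends_on=["payments-agent"],
    owner="finance-ops@example.com",
    version="1.2.0",
)

if __name__ == "__main__":
    print(invoice_reviewer)
```

Keeping the definition declarative like this is what lets it be reviewed and versioned like any other configuration artifact, which is the role the article compares to OpenAPI for APIs.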
Altman wrote: "But I wonder why Anthropic would go for something so clearly dishonest. Our most important principle for ads says that we won't do exactly this; we would obviously never run ads in the way Anthropic depicts them. We are not stupid and we know our users would reject that. I guess it's on brand for Anthropic doublespeak to use a deceptive ad to critique theoretical deceptive ads that aren't real, but a Super Bowl ad is not where I would expect it."
AI is no longer just a tool for efficiency; it's becoming a growth engine for the enterprise. With UK AI investment set to increase significantly in the next four years, success will hinge on integrating AI into core business strategies and reskilling the workforce. Organisations that act decisively, with the appropriate governance and controls in place for AI, will be the ones defining competitive advantage tomorrow.